



Double Randomized Underdamped Langevin with Dimension-Independent Convergence Guarantee. Yuanshi Liu, Cong Fang, Tong Zhang. School of Intelligence Science and Technology, Peking University

Neural Information Processing Systems

Sampling from a high-dimensional distribution is a key component of statistics, machine learning, and scientific computing, and constitutes the foundation of fields including Bayesian statistics and generative models [Liu and Liu, 2001, Brooks et al., 2011, Song et al.,

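For context on the sampling scheme named in the title: the sketch below is a minimal Euler–Maruyama discretization of plain underdamped Langevin dynamics, not the paper's double-randomized algorithm. The function name `underdamped_langevin` and all parameter choices (friction `gamma`, step size) are illustrative assumptions.

```python
import numpy as np

def underdamped_langevin(grad_log_p, x0, gamma=2.0, step=0.01,
                         n_steps=5000, rng=None):
    """Euler-Maruyama sketch of underdamped Langevin dynamics targeting a
    density p(x), given its score grad_log_p(x) = d/dx log p(x).
    Illustrative only; not the paper's double-randomized scheme."""
    rng = np.random.default_rng(rng)
    x = np.array(x0, dtype=float)
    v = np.zeros_like(x)  # auxiliary velocity variable
    samples = []
    for _ in range(n_steps):
        # velocity: friction term, force (score), and injected Gaussian noise
        v = (v - step * (gamma * v - grad_log_p(x))
             + np.sqrt(2.0 * gamma * step) * rng.standard_normal(x.shape))
        x = x + step * v
        samples.append(x.copy())
    return np.array(samples)

# Target a 2-D standard Gaussian, whose score is simply -x.
samps = underdamped_langevin(lambda x: -x, x0=np.zeros(2), rng=0)
```

After discarding a burn-in prefix, the empirical mean and standard deviation of `samps` should roughly match the N(0, I) target.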





Learning via Wasserstein-Based High Probability Generalisation Bounds

Neural Information Processing Systems

The authors contributed equally to this work. 37th Conference on Neural Information Processing Systems (NeurIPS 2023). Developing upper bounds on the generalisation gap, i.e., generalisation bounds, has been a longstanding topic in statistical learning.
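Since the entry concerns bounds stated in Wasserstein distance, a small illustration of the metric itself may help (this is generic background, not the paper's bound): for two equal-size one-dimensional empirical distributions, the 1-Wasserstein distance reduces to matching sorted samples. The helper name `wasserstein_1d` is an assumption for this sketch.

```python
import numpy as np

def wasserstein_1d(a, b):
    """W1 distance between two equal-size 1-D empirical distributions.
    In one dimension the optimal transport plan pairs sorted samples."""
    a, b = np.sort(np.asarray(a, float)), np.sort(np.asarray(b, float))
    return float(np.mean(np.abs(a - b)))

# Shifting each sample point by 1 costs exactly 1 on average.
print(wasserstein_1d([0.0, 1.0, 2.0], [1.0, 2.0, 3.0]))  # → 1.0
```

The sorted-matching shortcut is specific to one dimension; in higher dimensions computing Wasserstein distances requires solving an optimal transport problem.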